
    A reaction diffusion-like formalism for plastic neural networks reveals dissipative solitons at criticality

    Self-organized structures in networks with spike-timing-dependent plasticity (STDP) are likely to play a central role in information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate neurons with a correlation-sensitive learning rule inspired by, and qualitatively similar to, STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the nonlinearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that, close to the transition to the unstable regime, meta-stable solutions appear. The form of these dissipative solitons is determined analytically, and the evolution and interaction of several such coexisting objects are investigated.
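    The layer-to-layer propagation described here can be pictured with a minimal numerical sketch: a spatially extended feed-forward network of nonlinear rate neurons with distance-dependent (Gaussian) connectivity and a simple correlation-sensitive (Hebbian-like) weight update. The kernel width, sigmoidal nonlinearity, learning rate, and input shape below are illustrative assumptions, not the formalism or parameters of the paper.

```python
import numpy as np

n = 200                                  # neurons per layer (assumed)
x = np.linspace(-1.0, 1.0, n)            # spatial positions
sigma = 0.05                             # assumed connectivity kernel width

# Distance-dependent (Gaussian) feed-forward connectivity between two layers.
W = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
W /= W.sum(axis=1, keepdims=True)

def phi(u, gain=4.0, theta=0.5):
    """Saturating nonlinearity; its slope controls stability of the propagation."""
    return 1.0 / (1.0 + np.exp(-gain * (u - theta)))

def propagate(rate_in, W, eta=1e-3):
    """One layer-to-layer step: compute the next layer's rate and apply an
    illustrative correlation-sensitive (Hebbian-like) weight update."""
    rate_out = phi(W @ rate_in)
    W += eta * np.outer(rate_out - rate_out.mean(), rate_in - rate_in.mean())
    np.clip(W, 0.0, None, out=W)         # keep excitatory weights non-negative
    return rate_out, W

# Localized input bump travelling through successive layers.
rate = np.exp(-x**2 / (2 * 0.1**2))
for layer in range(10):
    rate, W = propagate(rate, W)
```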

    A unified view on weakly correlated recurrent networks

    The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to one of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The classes differ in the location of the additive noise in the rate dynamics, which is on the output side for spiking models and on the input side for the binary model. Both classes allow closed-form solutions for the covariance. For output noise, the covariance separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the presence of conduction delays and simplify derivations of established results. Our approach is applicable to general network structures and suitable for population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking fluctuations into account in the linearization procedure increases the accuracy of the effective theory, and we explain the class-dependent differences between covariances in the time and the frequency domain. Finally, we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
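    For the input-noise (Ornstein-Uhlenbeck-type) class mentioned above, the closed-form covariance can be illustrated numerically: the stationary covariance of a linear noisy rate network solves a Lyapunov equation. The network size, time constant, random connectivity, and uncorrelated noise below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Input-noise class (OU-type): tau * dx/dt = -x + W x + xi(t),
# with <xi(t) xi(t')^T> = D * delta(t - t').
# The stationary covariance C solves  A C + C A^T + Q = 0,
# with A = (W - 1)/tau and Q = D / tau^2.

rng = np.random.default_rng(0)
N, tau = 100, 10e-3
W = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))   # weak random coupling (assumed)
D = np.eye(N)                                        # uncorrelated input noise (assumed)

A = (W - np.eye(N)) / tau
Q = D / tau**2

# solve_continuous_lyapunov solves A X + X A^H = Q, so pass -Q to obtain
# A C + C A^T + Q = 0.
C = solve_continuous_lyapunov(A, -Q)

# Population-averaged cross-covariance between distinct neurons.
mean_cross_cov = (C.sum() - np.trace(C)) / (N * (N - 1))
```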

    A learning rule balancing energy consumption and information maximization in a feed-forward neuronal network

    Information measures are often used to assess the efficacy of neural networks, and learning rules can be derived through optimization procedures on such measures. In biological neural networks, computation is restricted by the amount of available resources. Considering energy restrictions, it is therefore reasonable to balance information-processing efficacy with energy consumption. Here, we studied networks of non-linear Hawkes neurons and assessed the information flow through these networks using mutual information. We then applied gradient descent on a combination of mutual information and energetic costs to obtain a learning rule. Through this procedure, we obtained a rule containing a sliding threshold, similar to the Bienenstock-Cooper-Munro rule. The rule contains terms that are local in time and in space plus one global variable common to the whole network. The rule thus belongs to the so-called three-factor rules, and the global variable could be related to a number of biological processes. In neural networks using this learning rule, frequent inputs are mapped onto low-energy orbits of the network while rare inputs are not learned.
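    The sliding-threshold character of such a rule can be sketched with a standard Bienenstock-Cooper-Munro-style update combined with a simple quadratic energy penalty on the output rate; this is a stand-in for the rule derived in the paper, not the rule itself. The trade-off weight, time constants, nonlinearity, and input statistics below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, eta, lambda_e, tau_theta = 50, 1e-3, 0.1, 100.0   # illustrative parameters

w = rng.normal(0.0, 0.1, n_in)       # feed-forward weights
theta = 1.0                          # sliding modification threshold

def step(x, w, theta):
    y = np.maximum(w @ x, 0.0)                      # rectified output rate
    # BCM-like term: potentiate above threshold, depress below; the energy term
    # depresses weights in proportion to the output rate, lowering metabolic cost
    # for frequently driven outputs.
    dw = eta * (x * y * (y - theta) - lambda_e * y * x)
    theta += (y**2 - theta) / tau_theta             # threshold tracks <y^2>
    return w + dw, theta

for _ in range(1000):
    x = rng.poisson(2.0, n_in).astype(float)        # assumed input statistics
    w, theta = step(x, w, theta)
```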

    Theory of mutual interaction of activity and connectivity in plastic neural networks

    The subject of this work is the interplay between activity in neuronal networks and network connectivity. Correlated neural activity is a known feature of the brain, and evidence is increasing that it is closely linked to information processing. Changes in synaptic weights, which appear to be an essential mechanism of long-term memory, are coupled to the covariance of activity by spike-timing-dependent plasticity (STDP). Conversely, changes of the connectivity influence the covariance structure, so that both dynamics - neuronal and synaptic - are mutually coupled.

    To investigate this interplay we first consider non-plastic networks to see how the connectivity determines the covariance of the activity. To simplify the analysis, we linearize the neuron models at their working points, effectively mapping them onto a linear noisy rate model. To distinguish generic features from model-specific properties, we investigate how different commonly used point neuron models - the leaky integrate-and-fire model, the Hawkes process, and the binary (Ising) model - correspond to each other within the linear approximation. Stochasticity in neuronal networks enables such a mapping even for discontinuous non-linearities of the neuron model. We then investigate the inverse problem: obtaining the connectivity from a given covariance. The developed method can help to construct networks with desired features and to reconstruct the connectivity from measured biological data. The obtained dependence of the covariance on the connectivity, together with the mapping of more complex neuron models onto analytically tractable noisy rate models, forms the basis for the subsequent research.

    Next, we consider plastic networks in which activity influences connectivity. We apply a top-down approach to derive learning rules from the optimization of an underlying principle - in this case the maximization of irreversibility - using a path-integral formulation for the linear-nonlinear noisy rate neurons. We investigate the evolution of networks governed by these learning rules and find that they can infer hidden causal relations by strengthening direct connections that correspond to indirect connections. The evolution of the weights is saturated by the neurons' non-linearity, so that in feed-forward networks synapses align with the eigenvectors of the covariance matrix of the driving presynaptic neurons, a feature also known from simpler neural systems. The obtained learning rules resemble STDP with a narrow antisymmetric learning window. Applying a bottom-up approach, we also investigate Hebbian-like learning rules corresponding to the opposite case, a wide symmetric learning window. The developed approach is suitable for arbitrary rate-dependent learning rules, allowing application to, e.g., leaky integrate-and-fire neurons.

    Such networks with spatially homogeneous organization are well understood, so here we investigate spatially extended systems of excitatory neurons with distance-dependent connectivity. We extend the reaction-diffusion-like formalism known to describe such systems with non-plastic synapses so that it applies to equilibrium states of systems with plasticity. We find the system parameters separating stable and unstable regimes and analytically obtain dissipative soliton solutions, which appear at the transition between these regimes provided sufficiently strong input signals are applied. We describe possible scenarios of interaction of such solutions when several of them coexist. Their ability to merge into a single solution, which in turn is subject to the same conditions, resembles the behavior of the elements of an association tree. With some generalizations, the developed formalism can also be applied to systems including inhibitory neurons, promising further new results.
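    The forward relation between connectivity and covariance referred to above can be sketched for the linearized input-noise rate model; the notation and noise convention below are assumptions for illustration, not taken verbatim from the thesis.

```latex
% Sketch (assumed convention): linearized input-noise rate dynamics
%   \tau \dot{x}(t) = -x(t) + W x(t) + \xi(t),
%   \langle \xi(t)\,\xi^{\mathsf T}(t') \rangle = D\,\delta(t - t').
% Stationary covariance C (Lyapunov equation) and cross-spectrum S(\omega):
\begin{align}
  \frac{W - \mathbb{1}}{\tau}\, C
    + C\, \frac{(W - \mathbb{1})^{\mathsf T}}{\tau}
    + \frac{D}{\tau^{2}} &= 0, \\[4pt]
  S(\omega) &= \bigl[(1 + i\omega\tau)\,\mathbb{1} - W\bigr]^{-1}
               D\,\bigl[(1 - i\omega\tau)\,\mathbb{1} - W^{\mathsf T}\bigr]^{-1}.
\end{align}
% The inverse problem mentioned above then amounts to solving these relations
% for W, given a measured C or S(\omega) and an assumed noise model D.
```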
